Fine-Grained Complexity Analysis of Two Classic TSP Variants
We analyze two classic variants of the Traveling Salesman Problem using the
toolkit of fine-grained complexity. Our first set of results is motivated by
the Bitonic TSP problem: given a set of points in the plane, compute a
shortest tour consisting of two monotone chains. It is a classic
dynamic-programming exercise to solve this problem in O(n^2) time. While the
near-quadratic dependency of similar dynamic programs for Longest Common
Subsequence and Discrete Frechet Distance has recently been proven to be
essentially optimal under the Strong Exponential Time Hypothesis, we show that
bitonic tours can be found in subquadratic time. More precisely, we present an
algorithm that solves bitonic TSP in O(n log^2 n) time and its bottleneck
version in O(n log n) time. Our second set of results concerns the popular
k-OPT heuristic for TSP in the graph setting. More precisely, we study the
k-OPT decision problem, which asks whether a given tour can be improved by a
k-OPT move that replaces k edges in the tour by k new edges. A simple
exhaustive-search algorithm solves k-OPT in O(n^k) time for fixed k. For
2-OPT, this is easily seen to be optimal. For k = 3 we prove that an algorithm
with a runtime of the form O(n^{3-ε}) exists if and only if All-Pairs
Shortest Paths in weighted digraphs has such an algorithm. The results for
k = 3 may suggest that the actual time complexity of k-OPT is
Θ(n^k). We show that this is not the case, by presenting an algorithm
that finds the best k-move in O(n^{⌊2k/3⌋+1}) time for
fixed k >= 3. This implies that 4-OPT can be solved in O(n^3) time,
matching the best-known algorithm for 3-OPT. Finally, we show how to beat the
quadratic barrier for k = 2 in two important settings, namely for points in the
plane and when we want to solve 2-OPT repeatedly. Comment: Extended abstract appears in the Proceedings of the 43rd
International Colloquium on Automata, Languages, and Programming (ICALP 2016)
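The classic O(n^2) dynamic program mentioned in the abstract is the textbook one: sort the points by x-coordinate and maintain, for each j, the length of the shortest pair of monotone chains ending at p_{j-1} and p_j. A minimal sketch, assuming distinct x-coordinates and Euclidean distances (function and variable names are ours):

```python
import math

def bitonic_tsp(points):
    """Length of a shortest bitonic tour: the classic O(n^2) dynamic program.

    l[i] holds the length of the shortest bitonic path from pts[i] to pts[j]
    that visits all of pts[0..j], maintained while j sweeps left to right.
    """
    pts = sorted(points)                      # sort by x-coordinate
    n = len(pts)
    if n < 2:
        return 0.0
    d = lambda a, b: math.dist(pts[a], pts[b])
    l = [0.0] * n
    l[0] = d(0, 1)                            # the single path p0 -> p1
    for j in range(2, n):
        # Either p_j attaches to some earlier endpoint p_k with k < j-1 ...
        best = min(l[k] + d(k, j) for k in range(j - 1))
        # ... or p_j extends the chain currently ending at p_{j-1}.
        for i in range(j - 1):
            l[i] += d(j - 1, j)
        l[j - 1] = best
    return l[n - 2] + d(n - 2, n - 1)         # close the tour with the last edge
```

On the unit square {(0,0), (0,1), (1,0), (1,1)} the shortest bitonic tour is the perimeter, length 4.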
EPR-based ghost imaging using a single-photon-sensitive camera
Correlated photon imaging, popularly known as ghost imaging, is a technique whereby an image is formed from light that has never interacted with the object. In ghost imaging experiments, two correlated light fields are produced. One of these fields illuminates the object, and the other field is measured by a spatially resolving detector. In the quantum regime, these correlated light fields are produced by entangled photons created by spontaneous parametric down-conversion. To date, all correlated photon ghost imaging experiments have scanned a single-pixel detector through the field of view to obtain spatial information. However, scanning leads to poor sampling efficiency, which scales inversely with the number of pixels, N, in the image. In this work, we overcome this limitation by using a time-gated camera to record the single-photon events across the full scene. We obtain high-contrast images (contrast of 90%) in either the image plane or the far field of the photon pair source, taking advantage of the Einstein–Podolsky–Rosen-like correlations in position and momentum of the photon pairs. Our images contain a large number of modes (>500), creating opportunities in low-light-level imaging and in quantum information processing.
On Embeddability of Buses in Point Sets
Set membership of points in the plane can be visualized by connecting
corresponding points via graphical features, such as paths, trees, polygons,
or ellipses. In this paper we study the \emph{bus embeddability problem} (BEP):
given a set of colored points we ask whether there exists a planar realization
with one horizontal straight-line segment per color, called a bus, such that all
points with the same color are connected with vertical line segments to their
bus. We present an ILP and an FPT algorithm for the general problem. For
restricted versions of this problem, such as when the relative order of buses
is predefined, or when a bus must be placed above all its points, we provide
efficient algorithms. We show that another restricted version of the problem
can be solved using 2-stack pushall sorting. On the negative side we prove the
NP-completeness of a special case of BEP. Comment: 19 pages, 9 figures, conference version at GD 201
A framework for models of movement in geographic space
This article concerns the theoretical foundations of movement informatics. We discuss general frameworks in which models of spatial movement may be developed. In particular, the article considers the object–field and Lagrangian–Eulerian dichotomies, and the SNAP/SPAN ontologies of the dynamic world, and classifies the variety of informatic structures according to these frameworks. A major challenge is transitioning between paradigms. Usually data is captured with respect to one paradigm but can usefully be represented in another. We discuss this process in formal terms and then describe experiments that we performed to show feasibility. It emerges that observational granularity plays a crucial role in these transitions.
Finding turning-points in ultra-high-resolution animal movement data
1. Recent advances in biologging have resulted in animal location data at unprecedentedly high temporal resolutions, sometimes many times per second. However, many current methods for analysing animal movement (e.g. step selection analysis or state-space modelling) were developed with lower-resolution data in mind. To make such methods usable with high-resolution data, we require techniques to identify features within the trajectory where movement deviates from a straight line.
2. We propose that the intricacies of movement paths, and particularly turns, reflect decisions made by animals so that turn points are particularly relevant for behavioural ecologists. As such, we introduce a fast, accurate algorithm for inferring turning-points in high-resolution data. For analysing big data, speed and scalability are vitally important. We test our algorithm on simulated data, where varying amounts of noise were added to paths of straight-line segments interspersed with turns. We also demonstrate our algorithm on data of free-ranging oryx (Oryx leucoryx). We compare our algorithm to existing statistical techniques for break-point inference.
3. The algorithm scales linearly and can analyse several hundred thousand data points in a few seconds on a mid-range desktop computer. It identified turn-points in simulated data with complete accuracy when the noise in the headings had a standard deviation of 8 degrees, well within the tolerance of many modern biologgers. It has comparable accuracy to the existing algorithms tested, and is up to three orders of magnitude faster.
4. Our algorithm, freely available in R and Python, serves as an initial step in processing ultra-high-resolution animal movement data, resulting in a rarefied path that can be used as input to many existing step-and-turn methods of analysis. The resulting path consists of points where the animal makes a clear turn, and thereby provides valuable data on decisions underlying movement patterns. As such, it provides an important breakthrough required as a starting point for analysing sub-second-resolution data.
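The authors' algorithm itself is not reproduced here; as a toy stand-in for break-point inference, flagging points where the heading between consecutive steps changes by more than a noise threshold conveys the core idea (the `threshold_deg` parameter and function name are our own illustrative choices):

```python
import math

def turn_points(xs, ys, threshold_deg=20.0):
    """Indices of path vertices where the heading between consecutive steps
    changes by more than `threshold_deg` degrees (a simple illustrative
    stand-in for turn-point inference, not the paper's algorithm)."""
    turns = []
    prev_heading = None
    for i in range(1, len(xs)):
        dx, dy = xs[i] - xs[i - 1], ys[i] - ys[i - 1]
        if dx == 0 and dy == 0:
            continue                      # stationary step: heading undefined
        heading = math.atan2(dy, dx)
        if prev_heading is not None:
            # smallest signed angle between the two headings
            delta = (heading - prev_heading + math.pi) % (2 * math.pi) - math.pi
            if abs(math.degrees(delta)) > threshold_deg:
                turns.append(i - 1)       # the shared vertex of the two steps
        prev_heading = heading
    return turns
```

For a path that runs east and then turns north, e.g. xs = [0, 1, 2, 2, 2], ys = [0, 0, 0, 1, 2], the single corner at index 2 is reported; this rarefied set of vertices is the kind of output a step-and-turn analysis would consume.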
Small grid embeddings of 3-polytopes
We introduce an algorithm that embeds a given 3-connected planar graph as a
convex 3-polytope with integer coordinates. The size of the coordinates is
bounded by O(2^{7.55n}). If the graph contains a triangle we can
bound the integer coordinates by O(2^{4.82n}). If the graph contains a
quadrilateral we can bound the integer coordinates by O(2^{5.46n}). The
crucial part of the algorithm is to find a convex plane embedding whose edges
can be weighted such that the sum of the weighted edges, seen as vectors,
cancels at every point. It is well known that this can be guaranteed for the
interior vertices by applying a technique of Tutte. We show how to extend
Tutte's ideas to construct a plane embedding where the weighted vector sums
cancel also on the vertices of the boundary face.
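The Tutte technique referred to here is the classic barycentric embedding: with unit edge weights, requiring the edge vectors at each interior vertex to cancel means every interior vertex sits at the centroid of its neighbours, which reduces to one linear system per coordinate axis. A minimal sketch with unit weights and the boundary face fixed on a regular polygon (function name and conventions are ours):

```python
import numpy as np

def tutte_embedding(n, edges, boundary):
    """Plane embedding of a planar graph: boundary vertices fixed on a
    regular polygon, each interior vertex placed at the centroid of its
    neighbours (unit Tutte weights), via a linear system per axis."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    pos = np.zeros((n, 2))
    for k, b in enumerate(boundary):          # fix the outer face
        ang = 2 * np.pi * k / len(boundary)
        pos[b] = (np.cos(ang), np.sin(ang))
    on_boundary = set(boundary)
    inner = [v for v in range(n) if v not in on_boundary]
    idx = {v: i for i, v in enumerate(inner)}
    A = np.zeros((len(inner), len(inner)))    # graph Laplacian on inner vertices
    rhs = np.zeros((len(inner), 2))           # contributions of fixed neighbours
    for v in inner:
        A[idx[v], idx[v]] = len(adj[v])
        for w in adj[v]:
            if w in idx:
                A[idx[v], idx[w]] -= 1.0
            else:
                rhs[idx[v]] += pos[w]
    pos[inner] = np.linalg.solve(A, rhs)      # deg(v)*pos[v] - sum of neighbours = 0
    return pos
```

For K4 with outer triangle {0, 1, 2}, the interior vertex 3 lands at the centroid of the triangle, i.e. the origin. The paper's contribution, by contrast, is choosing weights so that the cancellation extends to the boundary vertices as well.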
Modeling Checkpoint-Based Movement with the Earth Mover's Distance
Movement data comes in various forms, including trajectory data and checkpoint data. While trajectories give detailed information about the movement of individual entities, checkpoint data in its simplest form does not give identities, just counts at checkpoints. However, checkpoint data is of increasing interest, since it sidesteps privacy concerns and is readily available as a by-product of other data collection. In this paper we propose to use the Earth Mover's Distance as a versatile tool to reconstruct individual movements or flow based on checkpoint counts at different times. We analyze the modeling possibilities and provide experiments that validate model predictions, based on coarse-grained aggregations of data about actual movements of couriers in London, UK. While we cannot expect to reconstruct precise individual movements from highly granular checkpoint data, the evaluation does show that the approach can generate meaningful estimates of object movements.
B. Speckmann and K. Verbeek are supported by the Netherlands Organisation for Scientific Research (NWO) under project nos. 639.023.208 and 639.021.541, respectively. This paper arose from work initiated at Dagstuhl seminar 12512 “Representation, analysis and visualization of moving objects”, December 2012. The authors gratefully acknowledge Schloss Dagstuhl for their support.
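For checkpoints on a single line with unit spacing, the Earth Mover's Distance between two count vectors has a well-known closed form via cumulative sums; a minimal sketch of that special case (unit spacing and equal totals are simplifying assumptions, not the paper's full model):

```python
def emd_1d(counts_a, counts_b):
    """Earth Mover's Distance between two equal-mass count vectors over
    checkpoints at positions 0, 1, ..., n-1: each unit of count carried
    one step costs 1, so the total cost is the sum of |cumulative surplus|."""
    assert sum(counts_a) == sum(counts_b), "EMD needs equal total mass"
    surplus, cost = 0, 0
    for a, b in zip(counts_a, counts_b):
        surplus += a - b       # units that must be carried past this checkpoint
        cost += abs(surplus)   # carrying them one step to the next checkpoint
    return cost
```

For example, turning counts [3, 0, 0] into [0, 0, 3] moves 3 units two checkpoints to the right, for a cost of 3 × 2 = 6.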